Easy2Siksha.com
GNDU QUESTION PAPERS 2022
BA/BSc 4th SEMESTER
COMPUTER SCIENCE
(Data Structure & Programming Language Using C++)
Time Allowed: 3 Hours    Maximum Marks: 75
Note: Aempt Five quesons in all, selecng at least One queson from each secon. The
Fih queson may be aempted from any secon. All quesons carry equal marks.
SECTION-A
1. What is an Array? What are the various operations performed on an array? What are
the advantages and disadvantages of using arrays? Write an algorithm to insert an
element in an array.
2. What do you mean by data and data structure? What are the different classifications of data
structures? What is the need of algorithm complexity and how is it evaluated? Describe any
notation used to represent algorithm complexity.
SECTION-B
3. Dene Queue. How are inseron and deleon operaons performed over a queue?
How Queue can be implemented using an array and linked list ? Explain.
4. (A) Explain the Quick sort algorithm with an example.
(B) What is Polish Notaon? Explain with help of an example.
SECTION-C
5. What is selecon sort? Write down the algorithm to sort a list using selecon sort.
Discuss its complexity.
6. What is binary search? How does a binary search algorithm work? Give the syntax by
taking an example. Write and explain an algorithm for searching an element
using binary search.
SECTION-D
7. What is polymorphism? What are its types? Write and explain a C program to implement
the concept of polymorphism.
8. (A) What do you mean by Operator Overloading? Explain with the help of an example.
(B) What is a class? What is the relation between an object and a class? Write a program
which shows how to define a class, how to access member functions and how to create
and access objects in C++?
GNDU ANSWER PAPERS 2022
BA/BSc 4th SEMESTER
COMPUTER SCIENCE
(Data Structure & Programming Language Using C++)
Time Allowed: 3 Hours    Maximum Marks: 75
Note: Aempt Five quesons in all, selecng at least One queson from each secon. The
Fih queson may be aempted from any secon. All quesons carry equal marks.
SECTION-A
1. What is an Array? What are the various operations performed on an array? What are
the advantages and disadvantages of using arrays? Write an algorithm to insert an
element in an array.
Ans: What is an Array?
An array is a collection of data items of the same type stored together in contiguous
memory locations.
This simply means:
If you want to store 10 numbers, instead of making 10 separate variables, you keep
them in one group.
Each element is stored side-by-side in memory.
Every element can be accessed using its index number (position number).
For example, if you store marks of 5 students:
Marks[0] Marks[1] Marks[2] Marks[3] Marks[4]
Indexes usually start from 0, not 1.
So instead of writing:
mark1, mark2, mark3, mark4, mark5
We simply write:
marks[5]
Simple, neat, and organized!
Operations Performed on Arrays
Just like we perform various actions with cupboards (putting something in, taking something
out, searching for something), arrays also allow different operations.
1. Traversing
This means visiting each element one by one.
Example: Printing all marks of students.
2. Insertion
This means adding a new element into the array.
But inserting in arrays is not always easy if the array is already full, because every element is
placed in continuous memory. Sometimes we must shift elements to make space.
3. Deletion
This means removing an element from the array.
When we delete something, we again may need to shift elements so that the empty space is
filled.
4. Searching
This means finding whether a particular value is present or not.
There are two types:
Linear Search (checking one by one)
Binary Search (fast but only works on sorted arrays)
5. Updating
This means changing the value of an existing element.
Example: Updating marks when a teacher corrects a mistake.
Advantages of Arrays
Arrays are very helpful in many ways:
1. Easy to Use
Arrays are simple in concept and easy to understand. Accessing elements is straightforward
using index.
2. Fast Access (Random Access)
If you want the 5th element, the computer directly jumps to it using its index. No need to
check each previous element.
3. Organized Storage
When many values of the same type are stored in one place, handling becomes very easy
and structured.
4. Saves Memory (Compared to Separate Variables)
Instead of declaring many separate variables, one single array stores them neatly.
Disadvantages of Arrays
Just like everything has pros and cons, arrays also have some limitations.
1. Fixed Size
Once you declare the size of an array, it cannot be changed.
If you need more space, bad luck!
If you don’t use complete space, memory gets wasted.
2. Difficult Insertion and Deletion
If you insert or delete elements in the middle, many elements must be shifted. This makes
operations slow.
3. Same Data Type Only
All elements must be of the same type (all integers, all characters, etc.). You cannot mix data
types.
Algorithm to Insert an Element into an Array
Now let’s understand how to insert an element into an array step by step, like a small story.
Imagine you have a box storing 5 chocolates in order, and you suddenly want to place a new
chocolate in between. You’ll shift chocolates to the right to create space. Arrays also behave
like this.
Algorithm: Insert an Element in an Array
Assume:
A is the array
n is the number of elements
pos is the position where we want to insert
item is the value to insert
Steps:
1. Start.
2. Check if the array is already full. If full → insertion not possible.
3. Set i = n − 1.
4. Repeat while i >= pos:
   Move element A[i] to A[i + 1]
   Decrease i by 1
5. Place item at position pos → A[pos] = item.
6. Increase n by 1 (array now has one more element).
7. Stop.
That’s it! You successfully inserted an element into the array.
Quick Real-Life Connection
Think about a queue in a cinema hall. If someone wants to insert themselves in the middle:
People behind shift backward
The new person fits in
That’s exactly how array insertion works.
2. What do you mean by data and data structure? What are the different classifications of data
structures? What is the need of algorithm complexity and how is it evaluated? Describe any
notation used to represent algorithm complexity.
Ans: Introduction
Imagine you are organizing your study desk. You have books, pens, notes, and gadgets
scattered everywhere. If you simply pile them up randomly, you’ll waste time searching for
what you need. But if you arrange books on a shelf, pens in a holder, and notes in folders,
suddenly everything becomes easy to access.
That’s exactly what data structures do in computer science: they organize data so that
it can be used efficiently. And just like arranging your desk, we also need to know how much
effort (time and space) it takes to organize and use this data. That’s where algorithm
complexity comes in.
Let’s break this down step by step.
What is Data?
Definition: Data refers to raw facts, figures, or information that can be processed by
a computer.
Examples: Numbers (like 25, 100), text (“Hello”), images, audio, or even sensor
readings.
Data by itself is meaningless until it is organized or processed.
Think of data as the “ingredients” in a kitchen: flour, sugar, milk. They only become
useful when arranged into a recipe.
What is a Data Structure?
Definition: A data structure is a way of organizing and storing data so that it can be
accessed and modified efficiently.
It defines the relationship between data items and provides operations to
manipulate them.
Example: A list of student names can be stored in an array (fixed size, sequential
storage) or a linked list (flexible size, connected nodes). Both hold the same data but in
different structures, affecting how quickly you can search or update them.
Classifications of Data Structures
Data structures can be classified into several categories:
1. Primitive Data Structures
Basic building blocks provided by programming languages.
Examples: Integer, Float, Character, Boolean.
Simple but essential for constructing more complex structures.
2. Non-Primitive Data Structures
These are more advanced and can be divided further:
a) Linear Data Structures
Data is arranged sequentially, one after another.
Examples:
o Arrays: Fixed-size collection of elements.
o Linked Lists: Elements connected by pointers, flexible size.
o Stacks: Follows LIFO (Last In, First Out).
o Queues: Follows FIFO (First In, First Out).
Linear structures are like a queue at a shop: one person after another.
b) Non-Linear Data Structures
Data is not arranged sequentially; elements can connect in multiple ways.
Examples:
o Trees: Hierarchical structure (like a family tree).
o Graphs: Network of nodes connected by edges (like social media
connections).
Non-linear structures are like a city map: roads connect places in many directions.
c) Hash-Based Structures
Use hash functions to map data to specific locations.
Example: Hash Tables for fast searching.
Like a library catalog that lets you jump directly to the book’s location.
Why Do We Need Algorithm Complexity?
Now that we know how data is organized, the next question is: How efficiently can we use
it?
Different algorithms can solve the same problem, but some are faster or use less
memory.
Algorithm complexity helps us measure the efficiency of an algorithm in terms of:
1. Time Complexity: How much time an algorithm takes as input size grows.
2. Space Complexity: How much memory it uses.
Example: Searching for a name in a phone book.
If you check each name one by one, it’s slow (linear search).
If you use a binary search (divide and conquer), it’s much faster. Algorithm
complexity tells us which method is better.
How is Algorithm Complexity Evaluated?
1. Counting Basic Operations:
o We count the number of steps (comparisons, assignments) an algorithm
performs.
2. Input Size (n):
o Complexity is expressed as a function of input size.
o Example: Searching in an array of size n.
3. Worst, Best, and Average Cases:
o Worst Case: Maximum steps needed (e.g., searching for an element not
present).
o Best Case: Minimum steps (e.g., element found at the first position).
o Average Case: Expected steps over many trials.
This evaluation helps us predict performance before actually running the program.
Notations Used to Represent Algorithm Complexity
To express complexity, computer scientists use asymptotic notations. These notations
describe how an algorithm behaves as input size grows very large.
1. Big O Notation (O)
Represents the upper bound (worst-case scenario).
Example: Linear search in an array → O(n).
Meaning: Time grows proportionally with input size.
Think of Big O as the “maximum time you might wait.”
2. Omega Notation (Ω)
Represents the lower bound (best-case scenario).
Example: If the element is found at the first position in linear search → Ω(1).
Omega is the “minimum time you might wait.”
3. Theta Notation (Θ)
Represents the tight bound (often associated with the average case).
Example: Binary search → Θ(log n).
Theta is the “expected time you usually wait.”
A Relatable Story
Imagine you’re at a supermarket:
If you search every aisle one by one for sugar, that’s O(n): slow.
If you know sugar is always in aisle 3, that’s Ω(1): instant.
If you usually find it by checking half the aisles, that’s Θ(n/2): average.
This is exactly how algorithm complexity works—it tells us how long we’ll spend searching,
depending on the situation.
Summary Table

| Concept              | Explanation                  | Example                    |
|----------------------|------------------------------|----------------------------|
| Data                 | Raw facts and figures        | Numbers, text, images      |
| Data Structure       | Organized way to store data  | Arrays, Trees, Graphs      |
| Linear Structures    | Sequential arrangement       | Stack, Queue               |
| Non-Linear           | Hierarchical/network         | Tree, Graph                |
| Algorithm Complexity | Efficiency measure           | Time & Space               |
| Big O (O)            | Worst-case                   | O(n) for linear search     |
| Omega (Ω)            | Best-case                    | Ω(1) for first element     |
| Theta (Θ)            | Average-case                 | Θ(log n) for binary search |
Final Thoughts
Data and data structures are the backbone of computer science: they decide how
information is stored and accessed. But knowing how to organize data is not enough; we
must also measure how efficiently algorithms use it. That’s why algorithm complexity is
crucial.
By using notations like Big O, Omega, and Theta, we can predict performance, compare
algorithms, and choose the best one for the job.
SECTION-B
3. Dene Queue. How are inseron and deleon operaons performed over a queue?
How Queue can be implemented using an array and linked list ? Explain.
Ans: What is a Queue?
A Queue is a linear data structure that stores elements in a particular order. It works on the principle:
FIFO (First In, First Out)
The element that enters first will be the first one to leave.
Just like:
First person entering a bus = first person to exit
First request sent to a printer = first document printed
So, we say:
Insertion happens at the rear (back)
Deletion happens at the front
To make it easier:
The REAR is like the entry gate of the queue.
The FRONT is like the exit gate.
Operations on a Queue
Two main operations are performed on a queue:
1. Insertion (Enqueue Operation)
Insertion means adding a new element to the queue.
But where do we add?
Always at the REAR end.
Let’s say a queue currently has:
10, 20, 30
Here:
Front → 10
Rear → 30
If we insert 40, it goes behind 30:
10, 20, 30, 40
So now rear points to 40.
But insertion can happen only when space is available.
If the queue is full, we cannot insert, and this situation is called:
Queue Overflow
2. Deletion (Dequeue Operation)
Deletion means removing an element from the queue.
But who goes out?
Always the FRONT element.
From queue:
10, 20, 30, 40
If we delete one element:
10 is removed.
New front becomes 20.
So remaining queue:
20, 30, 40
Deletion cannot happen if the queue is already empty.
This situation is called:
Queue Underflow
Some Helpful Terms

| Term      | Meaning                             |
|-----------|-------------------------------------|
| Front     | Position from where deletion occurs |
| Rear      | Position where insertion occurs     |
| Overflow  | Queue is full; cannot insert        |
| Underflow | Queue is empty; cannot delete       |
How is a Queue Implemented?
Queues do not exist automatically; we create them using:
1. Arrays
2. Linked Lists
Let us understand both in a simple way.
Queue Implementation Using Array
Think of an array as a fixed-size box containing compartments.
For example, an array of size 5 means we have 5 fixed places.
Index: 0 1 2 3 4
Queue: - - - - -
We maintain:
front pointer → start of queue
rear pointer → end of queue
Initially:
front = -1
rear = -1
Insertion in Array Queue (Enqueue)
Steps:
1. Check if the queue is full.
2. If full → Overflow
3. Else increase rear
4. Insert element at rear position
5. If inserting first element, set front = 0
Example:
Insert 10
front = 0
rear = 0
Queue: 10 - - - -
Insert 20
front = 0
rear = 1
Queue: 10 20 - - -
Insert 30
front = 0
rear = 2
Queue: 10 20 30 - -
Deletion in Array Queue (Dequeue)
Steps:
1. Check if queue is empty
2. If empty → Underflow
3. Remove element at front
4. Increase front
From:
10 20 30
front = 0
rear = 2
Delete → 10 removed
front = 1
rear = 2
Queue: - 20 30 - -
Problem in Simple Array Queue
Even after deleting 10, we cannot use that empty space again.
When rear reaches the last index, the queue becomes full even if empty spaces exist at the front.
To solve this, we use a Circular Queue, but that is another topic. For now, remember:
Array queue is simple
But has a space wastage problem
Size is fixed
Queue Implementation Using Linked List
A linked list is like a chain of nodes. Each node has:
Data
Address of next node
In queue using linked list:
front points to first node
rear points to last node
Unlike array:
No fixed size
Grows dynamically
No space wastage
Insertion in Linked List Queue
Steps:
1. Create a new node
2. Put value in it
3. Link it at the end
4. Move rear to this new node
5. If queue was empty, front also points to same node
Example:
Insert 10
Front → 10 → NULL
Rear → 10
Insert 20
Front → 10 → 20 → NULL
Rear → 20
Insert 30
Front → 10 → 20 → 30 → NULL
Rear → 30
Deletion in Linked List Queue
Steps:
1. Check if queue empty
2. Remove node at front
3. Move front to next node
4. Free memory
Deleting 10:
Before: 10 → 20 → 30
After: 20 → 30
Front → 20
If last node gets deleted:
front becomes NULL
rear becomes NULL
Meaning queue is empty.
Difference Between Array Queue and Linked List Queue

| Feature        | Array Queue      | Linked List Queue              |
|----------------|------------------|--------------------------------|
| Size           | Fixed            | Dynamic (grows as needed)      |
| Memory         | Continuous       | Non-continuous (linked nodes)  |
| Overflow       | Possible if full | Only when memory is exhausted  |
| Implementation | Simple           | Slightly more complex          |
| Space wastage  | Yes              | No                             |
Where Do We Use Queue in Real Life and Computers?
Queues are extremely important in real-world systems and computer science.
Some examples:
Waiting lines in banks, hospitals, ticket counters
CPU scheduling
Printing documents
Handling customer support requests
Call centre systems
Traffic light systems
Online food order processing
So queue helps in fair and organized processing.
Final Summary
A Queue is a linear data structure
Works on FIFO (First In, First Out)
Insertion happens at REAR
Deletion happens at FRONT
Two major errors: Overflow & Underflow
Can be implemented using:
Array (simple but fixed size)
Linked List (dynamic and efficient)
4. (A) Explain the Quick sort algorithm with an example.
(B) What is Polish Notation? Explain with the help of an example.
Ans: Introduction
Sorting and expression evaluation are two fundamental tasks in computer science. Sorting
helps us organize data efficiently, while expression evaluation allows us to compute results
in mathematics and programming. Two important concepts here are the Quick Sort
algorithm and Polish Notation.
Quick Sort is one of the fastest and most widely used sorting algorithms, while Polish
Notation is a clever way of writing mathematical expressions without needing brackets.
Let’s explore both in detail, step by step, with examples that make them easy to
understand.
Part A: Quick Sort Algorithm
1. What is Quick Sort?
Quick Sort is a divide-and-conquer algorithm used to sort elements in ascending or
descending order. It works by selecting a “pivot” element, partitioning the array around the
pivot, and then recursively sorting the partitions.
Think of Quick Sort like organizing books on a shelf: you pick one book as a reference
(pivot), place smaller books on the left and larger ones on the right, and then repeat the
process for each side until everything is neatly arranged.
2. Steps of Quick Sort
1. Choose a Pivot: Select an element from the array (commonly the first, last, or middle
element).
2. Partition: Rearrange the array so that all elements smaller than the pivot are on its
left, and all greater elements are on its right.
3. Recursion: Apply Quick Sort recursively to the left and right sub-arrays.
4. Base Case: When the sub-array has one or zero elements, it is already sorted.
3. Example of Quick Sort
Let’s sort the array: [35, 12, 43, 8, 51]
Step 1: Choose Pivot Let’s pick the last element, 51, as the pivot.
Step 2: Partition
Elements smaller than 51 → [35, 12, 43, 8]
Pivot → 51
Elements greater than 51 → []
So the array becomes: [35, 12, 43, 8] | 51 | []
Step 3: Recursively Sort Left Side [35, 12, 43, 8]
Pivot = 8
Smaller than 8 → []
Greater than 8 → [35, 12, 43]
Result: [] | 8 | [35, 12, 43]
Step 4: Sort [35, 12, 43]
Pivot = 43
Smaller than 43 → [35, 12]
Greater than 43 → []
Result: [35, 12] | 43 | []
Step 5: Sort [35, 12]
Pivot = 12
Smaller than 12 → []
Greater than 12 → [35]
Result: [] | 12 | [35]
4. Final Sorted Array
Combining all steps: [8, 12, 35, 43, 51]
Quick Sort efficiently sorted the array by repeatedly partitioning and organizing around
pivots.
5. Advantages of Quick Sort
Very fast for large datasets.
Average time complexity: O(n log n).
In-place sorting (does not require extra memory like Merge Sort).
6. Disadvantages
Worst-case time complexity: O(n²) (if pivot selection is poor).
Recursive calls may cause stack overflow for very large arrays.
Part B: Polish Notation
1. What is Polish Notation?
Polish Notation, also called Prefix Notation, is a way of writing mathematical expressions
where the operator comes before the operands.
Example: Instead of writing 3 + 4, in Polish Notation we write + 3 4.
This method eliminates the need for brackets because the order of operations is clear from
the position of operators and operands.
2. Types of Polish Notation
1. Prefix Notation (Polish): Operator before operands.
o Example: + 3 4 → means 3 + 4.
2. Postfix Notation (Reverse Polish): Operator after operands.
o Example: 3 4 + → means 3 + 4.
Both notations are widely used in computer science, especially in compilers and calculators.
3. Example of Prefix Notation
Expression: (5 + 6) * 7
In normal (infix) notation: (5 + 6) * 7
In prefix notation: * + 5 6 7
Explanation:
First, + 5 6 means 5 + 6 = 11.
Then, * 11 7 means 11 * 7 = 77.
So the result is 77.
4. Example of Postfix Notation
Expression: (5 + 6) * 7
In postfix notation: 5 6 + 7 *
Explanation:
First, 5 6 + → 11.
Then, 11 7 * → 77.
Again, the result is 77.
5. Advantages of Polish Notation
No need for brackets or operator precedence rules.
Easy for computers to evaluate using stacks.
Simplifies compiler design and expression parsing.
A Relatable Story
Imagine you’re explaining math to a robot. If you say (5 + 6) * 7, the robot gets confused:
“Should I add first or multiply first?” You need brackets to clarify.
But if you say * + 5 6 7, the robot knows exactly what to do:
1. Add 5 and 6.
2. Multiply the result by 7.
That’s the beauty of Polish Notation: it makes instructions crystal clear.
Summary Table

| Concept                      | Explanation                         | Example                                  |
|------------------------------|-------------------------------------|------------------------------------------|
| Quick Sort                   | Divide-and-conquer sorting algorithm | [35, 12, 43, 8, 51] → [8, 12, 35, 43, 51] |
| Prefix Notation              | Operator before operands            | * + 5 6 7                                |
| Postfix Notation             | Operator after operands             | 5 6 + 7 *                                |
| Advantage of Quick Sort      | Fast, efficient                     | O(n log n) average                       |
| Advantage of Polish Notation | No brackets needed                  | Easy for computers                       |
Final Thoughts
Quick Sort and Polish Notation may seem technical, but they are elegant solutions to
everyday problems in computing. Quick Sort organizes data quickly and efficiently, while
Polish Notation simplifies mathematical expressions for both humans and machines.
SECTION-C
5. What is selecon sort? Write down the algorithm to sort a list using selecon sort.
Discuss its complexity.
Ans: What is Selection Sort? (Simple Meaning)
Imagine you have a messy group of students standing randomly according to their heights,
and you want them to stand in a line from shortest to tallest. What would you do? Maybe
you would scan the entire group, find the shortest student, bring them to the front, then
look at the remaining students, again find the shortest among them, fix them next, and
continue repeating this process until everyone is arranged properly.
This is exactly what selection sort does with numbers instead of students.
Selection sort works by repeatedly selecting the smallest (or largest, depending on
requirement) element from the unsorted part of the list and placing it at its correct position
in the sorted part of the list. Slowly, the list becomes fully sorted.
So, the key idea is simple:
Find the smallest → Place it in the correct position → Repeat.
How Selection Sort Works (Story Style Explanation)
Let’s say you have these numbers:
25, 10, 55, 30, 5
You want to arrange them in increasing order.
Step 1:
Look at the whole list.
The smallest number is 5.
Swap 5 with the first element (25).
Now list becomes:
5, 10, 55, 30, 25
Now 5 is at the correct position.
Step 2:
Now ignore the first element because it is already sorted.
Look at the rest:
10, 55, 30, 25
The smallest is 10, which is already at the second position.
So no change:
5, 10, 55, 30, 25
Step 3:
Now look at:
55, 30, 25
The smallest is 25.
Swap 55 and 25:
5, 10, 25, 30, 55
Step 4:
Look at:
30, 55
Smallest is 30, already in place.
Finally, only 55 is left. That means sorting is finished.
So, the sorted list is:
5, 10, 25, 30, 55
You can clearly see how gradually the list gets sorted from left to right.
Algorithm of Selection Sort
Now let us convert this logical process into a formal algorithm. But don’t worry, we will keep
it simple.
Algorithm (Steps):
1. Start from the first position in the list.
2. Assume the element at this position is the smallest.
3. Compare it with all other elements to find the actual smallest.
4. When you find the smallest element, swap it with the element at the current
position.
5. Move to the next position.
6. Repeat the process until the entire list is sorted.
More formally:
for i = 0 to n-2
    minIndex = i
    for j = i+1 to n-1
        if element[j] < element[minIndex]
            minIndex = j
    swap element[i] with element[minIndex]
end
This algorithm simply says:
Pick a position
Find smallest
Put it there
Move ahead
Repeat
Very neat and clean!
Time Complexity of Selection Sort
Now we come to an important topic: complexity.
Time complexity tells us:
How much time (or number of operations) an algorithm takes to sort data?
In selection sort:
For the first element, we compare it with all remaining (n − 1) elements.
For the second element, we compare it with (n − 2) elements.
For the third, we compare with (n − 3), and so on…
So total comparisons are:
(n − 1) + (n − 2) + (n − 3) + ... + 1
This equals:
n(n − 1) / 2
When simplified, this becomes:
O(n²)
So the time complexity of selection sort in:
Best Case = O(n²)
Average Case = O(n²)
Worst Case = O(n²)
This means selection sort is slow when the list is large because the number of comparisons
increases very fast.
Space Complexity
Selection sort does not need any extra memory. It only swaps elements within the same list.
So space complexity is:
O(1)
This means it is memory-efficient.
Important Characteristics of Selection Sort
1. Simple and easy to understand
2. Works well for small lists
3. Not suitable for large data
4. Always takes the same time, even if the list is already sorted
5. In-place sorting (no extra space needed)
Real-Life Example
Think of arranging playing cards on a table.
Instead of picking and inserting cards like insertion sort does, here you simply keep
searching for the smallest card and placing it in its correct position one by one.
Very natural, right?
Conclusion
Selection sort is like a very disciplined student who follows a fixed strategy:
Find the smallest element
Place it properly
Repeat until everything is sorted
It is simple, logical, and easy to visualize, which is why it is perfect for learning the basic idea
of sorting. Although it is not very efficient for large datasets due to its O(n²) time
complexity, it remains one of the most important foundational sorting algorithms in
computer science. Once a student understands selection sort, learning advanced algorithms
like Quick Sort and Merge Sort becomes much easier.
6. What is binary search? How does a binary search algorithm work? Give the syntax by
taking an example. Write and explain an algorithm for searching an element
using binary search.
Ans: Introduction
Imagine you are searching for a word in a dictionary. You don’t start from the first page and
check every word one by one. Instead, you open the dictionary in the middle, see if the
word comes before or after that page, and then narrow down your search. This clever way
of searching is exactly how Binary Search works in computer science.
Binary Search is one of the most efficient searching techniques, especially when dealing with
large, sorted datasets. It saves time by repeatedly dividing the search space in half until the
desired element is found.
In simple words: Binary Search is like playing a guessing game where you keep halving
the possibilities until you find the answer.
What is Binary Search?
Definition: Binary Search is a searching algorithm used to find the position of a
target element in a sorted array.
It works by comparing the target element with the middle element of the array.
If the target matches the middle element, the search is successful. Otherwise, the
search continues in either the left half or the right half of the array.
Key condition: The array must be sorted (ascending or descending). Without sorting,
Binary Search cannot work correctly.
How Does Binary Search Work?
Let’s break it down step by step:
1. Start with the middle element:
o Find the middle index of the array.
o Compare the target element with the middle element.
2. Check for equality:
o If the middle element equals the target, return its position.
3. If the target is smaller:
o Search only in the left half of the array.
4. If the target is larger:
o Search only in the right half of the array.
5. Repeat the process:
o Continue halving the array until the element is found or the search space
becomes empty.
Example of Binary Search
Suppose we have a sorted array: [5, 12, 23, 34, 45, 56, 67, 78, 89]
We want to search for 56.
Step 1: Middle element = 45 (index 4).
Target 56 > 45. So, search in the right half.
Step 2: New sub-array = [56, 67, 78, 89] (indices 5 to 8).
Middle element = 67 (index 6).
Target 56 < 67. So, search in the left half.
Step 3: New sub-array = [56] (index 5).
Middle element = 56.
Target found at index 5.
Result: Element 56 is found at index 5 in the array.
Syntax of Binary Search (in C-like pseudocode)

int binarySearch(int arr[], int n, int target) {
    int low = 0;
    int high = n - 1;
    while (low <= high) {
        int mid = (low + high) / 2;
        if (arr[mid] == target)
            return mid;        // Element found
        else if (arr[mid] < target)
            low = mid + 1;     // Search right half
        else
            high = mid - 1;    // Search left half
    }
    return -1;                 // Element not found
}
This syntax shows the iterative approach to Binary Search.
Algorithm for Binary Search
Let’s write the algorithm step by step in plain language:
Algorithm: Binary Search
1. Start with two pointers: low = 0 and high = n - 1.
2. Repeat until low <= high:
   a. Find the middle index: mid = (low + high) / 2.
   b. If arr[mid] == target, return mid.
   c. If arr[mid] < target, set low = mid + 1.
   d. If arr[mid] > target, set high = mid - 1.
3. If the element is not found, return -1.
Explanation of Algorithm with Example
Let’s search for 78 in the array: [5, 12, 23, 34, 45, 56, 67, 78, 89]
Iteration 1:
o low = 0, high = 8, mid = 4.
o arr[mid] = 45. Target 78 > 45.
o New range: low = 5, high = 8.
Iteration 2:
o mid = (5 + 8) / 2 = 6.
o arr[mid] = 67. Target 78 > 67.
o New range: low = 7, high = 8.
Iteration 3:
o mid = (7 + 8) / 2 = 7.
o arr[mid] = 78. Target found.
Result: Element 78 is found at index 7.
Time and Space Complexity
Time Complexity:
o Best Case: O(1) (element found at the middle).
o Worst Case: O(log n) (keep halving until found).
o Average Case: O(log n).
Space Complexity:
o Iterative version: O(1).
o Recursive version: O(log n) (due to function call stack).
Binary Search is extremely efficient compared to Linear Search (O(n)).
A Relatable Story
Imagine you are guessing a number between 1 and 100. If you guess randomly, it may take
many tries. But if you always guess the middle number, you can halve the range each time.
First guess 50, then 75, then 62, and so on. Within just 7 tries, you’ll find the number.
This is exactly how Binary Search works: cutting the problem in half each time until the
answer is found.
Summary Table

| Step | Action              | Result               |
|------|---------------------|----------------------|
| 1    | Find middle element | Compare with target  |
| 2    | If equal            | Return position      |
| 3    | If smaller          | Search left half     |
| 4    | If larger           | Search right half    |
| 5    | Repeat              | Until found or empty |
Final Thoughts
Binary Search is a brilliant example of how logic can make tasks faster and easier. By halving
the search space at each step, it drastically reduces the number of comparisons needed. It is
widely used in programming, databases, and even everyday problem-solving.
SECTION-D
7. What is polymorphism? What are its types? Write and explain a C program to implement the concept of polymorphism.
Ans: What is Polymorphism?
Imagine you have one word, but it can behave differently in different situations. For
example, think about the word “run.”
A person can run.
A computer program can run.
A machine can run.
Same word, but different meanings depending on where it is used.
This idea is exactly what polymorphism is in programming.
The word polymorphism is made of two Greek words:
Poly = many
Morph = forms
So, polymorphism means “one thing having many forms.”
In programming, polymorphism allows a single function, name, or symbol to behave
differently based on the context. This reduces confusion, increases flexibility, and makes
programs easier to manage and extend. Instead of writing many different functions for
similar tasks, we can use one interface and different implementations.
Types of Polymorphism
Generally, polymorphism is of two main types:
1. Compile-Time Polymorphism (Static Polymorphism)
This type of polymorphism is handled during compilation. The decision of which function to
execute is taken before the program runs. Examples include:
Function Overloading
Operator Overloading
However, the C language does not directly support function overloading or operator overloading the way C++ does, so true compile-time polymorphism is not naturally available in C; C++ supports it fully.
2. Run-Time Polymorphism (Dynamic Polymorphism)
Here, the decision about which function will execute is taken while the program is running.
This usually happens through virtual functions and inheritance in OOP languages like C++.
But again, C does not support classes and inheritance. Still, C is powerful, and we can
simulate polymorphism using function pointers and structures.
That means we can still implement the “behavior changing” nature of polymorphism in C,
even though C is not an object-oriented language.
How Do We Show Polymorphism in C?
Since C doesn’t have built-in polymorphism like C++, we use:
Structures
Function pointers
We create different functions, but call them using the same function pointer name. This
creates a polymorphic effect.
Let’s take a very simple real-world inspired example:
Suppose we have different shapes like:
Circle
Rectangle
Both shapes can “draw”, but they draw differently.
Same action: draw(); different behavior depending on the object.
We will simulate this idea in C.
C Program Demonstrating Polymorphism
Here is a simple and clear C program showing polymorphism using structure and function
pointer:
#include <stdio.h>

// Function for drawing a circle
void drawCircle() {
    printf("Drawing a Circle\n");
}

// Function for drawing a rectangle
void drawRectangle() {
    printf("Drawing a Rectangle\n");
}

// Structure with function pointer
struct Shape {
    void (*draw)(); // function pointer
};

int main() {
    // Creating two structure variables
    struct Shape shape1;
    struct Shape shape2;

    // Assigning function addresses
    shape1.draw = drawCircle;
    shape2.draw = drawRectangle;

    // Calling the same function name but getting different behaviors
    printf("Shape 1: ");
    shape1.draw();
    printf("Shape 2: ");
    shape2.draw();

    return 0;
}
Explanation of the Program (In Simple Language)
Let’s break the program like a story:
1. First, we included stdio.h because we need printf to display output.
2. We created two functions:
drawCircle() → prints “Drawing a Circle”
drawRectangle() → prints “Drawing a Rectangle”
Both functions represent different behaviors.
3. Then we created a struct Shape which contains:
void (*draw)();
This line means:
We are creating a function pointer
It can point to any function that does not take any argument and does not return
anything
The name of this pointer is draw
Think of it like a remote control.
The same remote can control different TVs if paired differently.
4. In main():
We created two shapes:
struct Shape shape1;
struct Shape shape2;
5. Then we assigned:
shape1.draw = drawCircle;
shape2.draw = drawRectangle;
So:
shape1 now behaves like a Circle
shape2 behaves like a Rectangle
6. Finally, we called:
shape1.draw();
shape2.draw();
Even though we are using the same name draw(), the output is different.
This is polymorphism in action!
Same interface → different behavior.
Output of the program will be:
Shape 1: Drawing a Circle
Shape 2: Drawing a Rectangle
Why is Polymorphism Important?
Polymorphism is very powerful because:
• It makes programs flexible
• It supports code reusability
• It makes maintenance easier
• It allows a uniform interface with different implementations
• It reduces complexity
8. (A) What do you mean by Operator Overloading? Explain with the help of an example.
(B) What is a class? What is the relation between an object and a class? Write a program which shows how to define a class, how to access member functions and how to create and access objects in C++.
Ans: Introduction
C++ is a powerful programming language that supports Object-Oriented Programming
(OOP). Two important concepts in OOP are operator overloading and classes/objects.
Operator overloading allows us to give new meaning to existing operators (like +, -, *) when
applied to user-defined types. Classes and objects, on the other hand, are the backbone of OOP: they help us model real-world entities in code.
In simple words: operator overloading makes our code more natural to read, while classes and objects make it more structured and organized.
Part A: Operator Overloading
1. What is Operator Overloading?
Operator overloading is a feature in C++ that allows us to redefine the behavior of
operators for user-defined data types.
Normally, operators like + or - work with primitive types (integers, floats). But with
operator overloading, we can make them work with objects too.
Example: If we have a Complex number class, we can overload the + operator to add two complex numbers directly using c1 + c2.
2. Why Do We Need Operator Overloading?
It makes code more intuitive and readable.
Instead of writing c1.add(c2), we can simply write c1 + c2.
It allows objects to behave like built-in types.
3. Example of Operator Overloading
#include <iostream>
using namespace std;

class Complex {
    int real, imag;
public:
    Complex(int r = 0, int i = 0) {
        real = r;
        imag = i;
    }

    // Overloading the + operator
    Complex operator + (const Complex &obj) {
        Complex result;
        result.real = real + obj.real;
        result.imag = imag + obj.imag;
        return result;
    }

    void display() {
        cout << real << " + " << imag << "i" << endl;
    }
};

int main() {
    Complex c1(3, 4), c2(1, 2);
    Complex c3 = c1 + c2; // Using the overloaded + operator
    c3.display();         // Output: 4 + 6i
    return 0;
}
4. Explanation of Example
We created a Complex class with real and imaginary parts.
We overloaded the + operator to add two complex numbers.
When we write c1 + c2, the overloaded function is called, returning a new Complex
object.
The result is displayed as 4 + 6i.
This shows how operator overloading makes object manipulation natural and intuitive.
Part B: Classes and Objects
1. What is a Class?
A class is a blueprint or template for creating objects.
It defines attributes (data members) and behaviors (member functions).
Example: A Car class may have attributes like color, speed, and functions like drive()
or brake().
Think of a class as a recipe, and objects as the actual dishes prepared using that recipe.
2. What is an Object?
An object is an instance of a class.
It represents a real-world entity with specific values for attributes.
Example: If Car is a class, then myCar is an object with color = red and speed = 120
km/h.
3. Relation Between Class and Object
A class defines the structure; an object brings it to life.
Without a class, objects cannot exist. Without objects, classes are just abstract ideas.
Multiple objects can be created from the same class, each with different attribute
values.
Example: Class = Student. Objects = Ravi, Priya, Aman. All are students but with different names and marks.
4. Example Program in C++
#include <iostream>
#include <string>
using namespace std;

// Defining a class
class Student {
    string name;
    int age;
public:
    // Member function to set data
    void setData(string n, int a) {
        name = n;
        age = a;
    }

    // Member function to display data
    void display() {
        cout << "Name: " << name << ", Age: " << age << endl;
    }
};

int main() {
    // Creating objects of the Student class
    Student s1, s2;

    // Accessing member functions
    s1.setData("Ravi", 20);
    s2.setData("Priya", 19);

    // Displaying object data
    s1.display(); // Output: Name: Ravi, Age: 20
    s2.display(); // Output: Name: Priya, Age: 19

    return 0;
}
5. Explanation of Example
We defined a Student class with attributes name and age.
We created two objects s1 and s2.
Using member functions, we set and displayed their data.
Each object had its own values, showing how objects are independent instances of a
class.
This demonstrates how classes provide structure, and objects bring that structure into reality.
Key Differences Between Class and Object

Aspect      Class                              Object
Definition  Blueprint/template                 Instance of a class
Existence   Abstract, does not occupy memory   Concrete, occupies memory
Example     Car                                myCar, yourCar
Purpose     Defines attributes and behaviors   Represents a real-world entity
A Relatable Story
Think of a class as a house design made by an architect. It shows where the rooms, doors,
and windows will be. But until someone builds the house, it’s just a design. When the house
is actually built, that’s an object.
Similarly, a class is the design, and an object is the real thing.
Final Thoughts
Operator Overloading makes user-defined types behave like built-in types,
improving readability and usability.
Classes and Objects are the foundation of OOP, helping us model real-world entities
in code.
Together, these concepts make C++ powerful, flexible, and closer to human thinking.
In essence: operator overloading gives elegance to our code, while classes and objects give it structure and meaning.
This paper has been carefully prepared for educational purposes. If you notice any mistakes or have suggestions, feel free to share your feedback.